#Asynchronous programming
Explore tagged Tumblr posts
la-principessa-nuova · 2 months ago
Text
Product Owner: We just found out the client is still using Internet Explorer, so we’ll need to add ES5 support. Can you get that done by the end of the day?
Developer: Well I can’t make any promises, but I’ll give you a callback this afternoon and let you know how far along it is.
2 notes · View notes
jcmarchi · 2 months ago
Text
Asynchronous LLM API Calls in Python: A Comprehensive Guide
New Post has been published on https://thedigitalinsider.com/asynchronous-llm-api-calls-in-python-a-comprehensive-guide/
Asynchronous LLM API Calls in Python: A Comprehensive Guide
As developers and data scientists, we often find ourselves needing to interact with large language models (LLMs) through APIs. However, as our applications grow in complexity and scale, the need for efficient and performant API interactions becomes crucial. This is where asynchronous programming shines, allowing us to maximize throughput and minimize latency when working with LLM APIs.
In this comprehensive guide, we’ll explore the world of asynchronous LLM API calls in Python. We’ll cover everything from the basics of asynchronous programming to advanced techniques for handling complex workflows. By the end of this article, you’ll have a solid understanding of how to leverage asynchronous programming to supercharge your LLM-powered applications.
Before we dive into the specifics of async LLM API calls, let’s establish a solid foundation in asynchronous programming concepts.
Asynchronous programming allows multiple operations to be executed concurrently without blocking the main thread of execution. In Python, this is primarily achieved through the asyncio module, which provides a framework for writing concurrent code using coroutines, event loops, and futures.
Key concepts:
Coroutines: Functions defined with async def that can be paused and resumed.
Event Loop: The central execution mechanism that manages and runs asynchronous tasks.
Awaitables: Objects that can be used with the await keyword (coroutines, tasks, futures).
Here’s a simple example to illustrate these concepts:
import asyncio

async def greet(name):
    await asyncio.sleep(1)  # Simulate an I/O operation
    print(f"Hello, {name}!")

async def main():
    # Run all three greetings concurrently
    await asyncio.gather(
        greet("Alice"),
        greet("Bob"),
        greet("Charlie")
    )

asyncio.run(main())
In this example, we define an asynchronous function greet that simulates an I/O operation with asyncio.sleep(). The main function uses asyncio.gather() to run multiple greetings concurrently. Despite the sleep delay, all three greetings will be printed after approximately 1 second, demonstrating the power of asynchronous execution.
The Need for Async in LLM API Calls
When working with LLM APIs, we often encounter scenarios where we need to make multiple API calls, either in sequence or parallel. Traditional synchronous code can lead to significant performance bottlenecks, especially when dealing with high-latency operations like network requests to LLM services.
Consider a scenario where we need to generate summaries for 100 different articles using an LLM API. With a synchronous approach, each API call would block until it receives a response, potentially taking several minutes to complete all requests. An asynchronous approach, on the other hand, allows us to initiate multiple API calls concurrently, dramatically reducing the overall execution time.
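To make the difference concrete, here is a minimal sketch (not a real API call; asyncio.sleep stands in for a roughly one-second network round trip to an LLM service) comparing a sequential loop with asyncio.gather:

import asyncio
import time

async def fake_llm_call(article_id):
    await asyncio.sleep(1)  # stand-in for a ~1 second network round trip
    return f"Summary of article {article_id}"

async def sequential(n):
    # One request at a time: total time grows linearly with n
    return [await fake_llm_call(i) for i in range(n)]

async def concurrent(n):
    # All requests in flight at once: total time stays around 1 second
    return await asyncio.gather(*(fake_llm_call(i) for i in range(n)))

for runner in (sequential, concurrent):
    start = time.perf_counter()
    asyncio.run(runner(10))
    print(f"{runner.__name__}: {time.perf_counter() - start:.1f}s")

With 10 simulated calls, the sequential version takes about 10 seconds while the concurrent one finishes in roughly 1 second; real LLM calls behave similarly, as long as the provider allows that level of concurrency.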
Setting Up Your Environment
To get started with async LLM API calls, you’ll need to set up your Python environment with the necessary libraries. Here’s what you’ll need:
Python 3.7 or higher (for native asyncio support)
aiohttp: An asynchronous HTTP client library
openai: The official OpenAI Python client (if you’re using OpenAI’s GPT models)
langchain: A framework for building applications with LLMs (optional, but recommended for complex workflows)
You can install these dependencies using pip:
pip install aiohttp openai langchain
Basic Async LLM API Calls with asyncio and aiohttp
Let’s start by making a simple asynchronous call to an LLM API. We’ll use OpenAI’s GPT-3.5 API through the official AsyncOpenAI client as an example, but the same pattern applies to other LLM APIs, including ones you call directly with aiohttp.
import asyncio
from openai import AsyncOpenAI

async def generate_text(prompt, client):
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

async def main():
    prompts = [
        "Explain quantum computing in simple terms.",
        "Write a haiku about artificial intelligence.",
        "Describe the process of photosynthesis."
    ]

    async with AsyncOpenAI() as client:
        # Create one task per prompt and run them concurrently
        tasks = [generate_text(prompt, client) for prompt in prompts]
        results = await asyncio.gather(*tasks)

    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
In this example, we define an asynchronous function generate_text that makes a call to the OpenAI API using the AsyncOpenAI client. The main function creates multiple tasks for different prompts and uses asyncio.gather() to run them concurrently.
This approach allows us to send multiple requests to the LLM API simultaneously, significantly reducing the total time required to process all prompts.
Advanced Techniques: Batching and Concurrency Control
While the previous example demonstrates the basics of async LLM API calls, real-world applications often require more sophisticated approaches. Let’s explore two important techniques: batching requests and controlling concurrency.
Batching Requests: When dealing with a large number of prompts, it’s often more efficient to batch them into groups rather than sending individual requests for each prompt. This reduces the overhead of multiple API calls and can lead to better performance.
import asyncio
from openai import AsyncOpenAI

async def process_batch(batch, client):
    # Send all requests in the batch concurrently
    responses = await asyncio.gather(*[
        client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        for prompt in batch
    ])
    return [response.choices[0].message.content for response in responses]

async def main():
    prompts = [f"Tell me a fact about number {i}" for i in range(100)]
    batch_size = 10

    async with AsyncOpenAI() as client:
        results = []
        for i in range(0, len(prompts), batch_size):
            batch = prompts[i:i + batch_size]
            batch_results = await process_batch(batch, client)
            results.extend(batch_results)

    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
Concurrency Control: While asynchronous programming allows for concurrent execution, it’s important to control the level of concurrency to avoid overwhelming the API server or exceeding rate limits. We can use asyncio.Semaphore for this purpose.
import asyncio
from openai import AsyncOpenAI

async def generate_text(prompt, client, semaphore):
    # Only max_concurrent_requests coroutines can hold the semaphore at once
    async with semaphore:
        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

async def main():
    prompts = [f"Tell me a fact about number {i}" for i in range(100)]
    max_concurrent_requests = 5
    semaphore = asyncio.Semaphore(max_concurrent_requests)

    async with AsyncOpenAI() as client:
        tasks = [generate_text(prompt, client, semaphore) for prompt in prompts]
        results = await asyncio.gather(*tasks)

    for prompt, result in zip(prompts, results):
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
In this example, we use a semaphore to limit the number of concurrent requests to 5, ensuring we don’t overwhelm the API server.
Error Handling and Retries in Async LLM Calls
When working with external APIs, it’s crucial to implement robust error handling and retry mechanisms. Let’s enhance our code to handle common errors and implement exponential backoff for retries.
import asyncio
from openai import AsyncOpenAI
from tenacity import retry, stop_after_attempt, wait_exponential

class APIError(Exception):
    pass

# reraise=True makes tenacity re-raise the final APIError instead of its own
# RetryError, so the caller's except APIError block actually catches it
@retry(stop=stop_after_attempt(3),
       wait=wait_exponential(multiplier=1, min=4, max=10),
       reraise=True)
async def generate_text_with_retry(prompt, client):
    try:
        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"Error occurred: {e}")
        raise APIError("Failed to generate text")

async def process_prompt(prompt, client, semaphore):
    async with semaphore:
        try:
            result = await generate_text_with_retry(prompt, client)
            return prompt, result
        except APIError:
            return prompt, "Failed to generate response after multiple attempts."

async def main():
    prompts = [f"Tell me a fact about number {i}" for i in range(20)]
    max_concurrent_requests = 5
    semaphore = asyncio.Semaphore(max_concurrent_requests)

    async with AsyncOpenAI() as client:
        tasks = [process_prompt(prompt, client, semaphore) for prompt in prompts]
        results = await asyncio.gather(*tasks)

    for prompt, result in results:
        print(f"Prompt: {prompt}\nResponse: {result}\n")

asyncio.run(main())
This enhanced version includes:
A custom APIError exception for API-related errors.
A generate_text_with_retry function decorated with @retry from the tenacity library, implementing exponential backoff.
Error handling in the process_prompt function to catch and report failures.
Optimizing Performance: Streaming Responses
For long-form content generation, streaming responses can significantly improve the perceived performance of your application. Instead of waiting for the entire response, you can process and display chunks of text as they become available.
import asyncio
from openai import AsyncOpenAI

async def stream_text(prompt, client):
    stream = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        stream=True
    )

    full_response = ""
    async for chunk in stream:
        # Each chunk carries an incremental piece of the response
        if chunk.choices[0].delta.content is not None:
            content = chunk.choices[0].delta.content
            full_response += content
            print(content, end='', flush=True)

    print("\n")
    return full_response

async def main():
    prompt = "Write a short story about a time-traveling scientist."
    async with AsyncOpenAI() as client:
        result = await stream_text(prompt, client)
    print(f"Full response:\n{result}")

asyncio.run(main())
This example demonstrates how to stream the response from the API, printing each chunk as it arrives. This approach is particularly useful for chat applications or any scenario where you want to provide real-time feedback to the user.
Building Async Workflows with LangChain
For more complex LLM-powered applications, the LangChain framework provides a high-level abstraction that simplifies the process of chaining multiple LLM calls and integrating other tools. Let’s look at an example of using LangChain with async capabilities:
import asyncio
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

async def generate_story(topic):
    llm = OpenAI(
        temperature=0.7,
        streaming=True,
        callback_manager=AsyncCallbackManager([StreamingStdOutCallbackHandler()])
    )
    prompt = PromptTemplate(
        input_variables=["topic"],
        template="Write a short story about {topic}."
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    return await chain.arun(topic=topic)

async def main():
    topics = ["a magical forest", "a futuristic city", "an underwater civilization"]
    tasks = [generate_story(topic) for topic in topics]
    stories = await asyncio.gather(*tasks)

    for topic, story in zip(topics, stories):
        print(f"\nTopic: {topic}\nStory: {story}\n{'=' * 50}\n")

asyncio.run(main())
This example shows how LangChain can be used to create more complex workflows with streaming and asynchronous execution. The AsyncCallbackManager and StreamingStdOutCallbackHandler enable real-time streaming of the generated content.
Serving Async LLM Applications with FastAPI
To make your async LLM application available as a web service, FastAPI is a great choice due to its native support for asynchronous operations. Here’s an example of how to create a simple API endpoint for text generation:
import asyncio

from fastapi import FastAPI, BackgroundTasks
from pydantic import BaseModel
from openai import AsyncOpenAI

app = FastAPI()
client = AsyncOpenAI()

class GenerationRequest(BaseModel):
    prompt: str

class GenerationResponse(BaseModel):
    generated_text: str

@app.post("/generate", response_model=GenerationResponse)
async def generate_text(request: GenerationRequest, background_tasks: BackgroundTasks):
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": request.prompt}]
    )
    generated_text = response.choices[0].message.content

    # Simulate some post-processing in the background
    background_tasks.add_task(log_generation, request.prompt, generated_text)

    return GenerationResponse(generated_text=generated_text)

async def log_generation(prompt: str, generated_text: str):
    # Simulate logging or additional processing
    await asyncio.sleep(2)
    print(f"Logged: Prompt '{prompt}' generated text of length {len(generated_text)}")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
This FastAPI application creates an endpoint /generate that accepts a prompt and returns generated text. It also demonstrates how to use background tasks for additional processing without blocking the response.
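Once the server is running (the uvicorn call above binds it to port 8000), you could exercise the endpoint with a small async client. This is only an illustrative sketch and assumes the httpx library is installed:

import asyncio
import httpx

async def main():
    async with httpx.AsyncClient(timeout=60) as http:
        resp = await http.post(
            "http://localhost:8000/generate",
            json={"prompt": "Explain event loops in one sentence."},
        )
        resp.raise_for_status()
        print(resp.json()["generated_text"])

asyncio.run(main())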
Best Practices and Common Pitfalls
As you work with async LLM APIs, keep these best practices in mind:
Use connection pooling: When making multiple requests, reuse connections (for example, a single shared client or session) to reduce overhead; a sketch follows this list.
Implement proper error handling: Always account for network issues, API errors, and unexpected responses.
Respect rate limits: Use semaphores or other concurrency control mechanisms to avoid overwhelming the API.
Monitor and log: Implement comprehensive logging to track performance and identify issues.
Use streaming for long-form content: It improves user experience and allows for early processing of partial results.
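To make the connection-pooling point above concrete, here is a minimal aiohttp sketch in which a single ClientSession (and its underlying connection pool) is shared across all requests, with the pool size capped via TCPConnector. The endpoint URL and payload shape are placeholders rather than a real LLM API:

import asyncio
import aiohttp

API_URL = "https://api.example.com/v1/generate"  # placeholder endpoint, not a real API

async def fetch(session, payload):
    # Every call reuses the same session and its pooled connections
    async with session.post(API_URL, json=payload) as resp:
        resp.raise_for_status()
        return await resp.json()

async def main():
    # Cap the pool at 10 simultaneous connections
    connector = aiohttp.TCPConnector(limit=10)
    async with aiohttp.ClientSession(connector=connector) as session:
        payloads = [{"prompt": f"Tell me a fact about number {i}"} for i in range(20)]
        results = await asyncio.gather(*(fetch(session, p) for p in payloads))
    print(f"Received {len(results)} responses")

asyncio.run(main())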
0 notes
robomad · 3 months ago
Text
Building Real-Time Notifications with Django Channels: A Step-by-Step Guide
Introduction: In today’s dynamic web applications, real-time features such as notifications, chat, and live updates have become increasingly important to enhance user engagement and experience. Django Channels extends Django to handle WebSockets, allowing you to build real-time applications. By integrating Django Channels into your project, you can implement real-time notifications that alert…
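For a rough sense of what that looks like in code, here is a minimal, hypothetical sketch of a Channels WebSocket consumer that broadcasts notifications to connected clients. It assumes Channels is installed with a channel layer configured; the module path, group name, and event type are illustrative, and routing/ASGI setup is omitted:

# notifications/consumers.py (hypothetical module path)
import json
from channels.generic.websocket import AsyncWebsocketConsumer

class NotificationConsumer(AsyncWebsocketConsumer):
    group_name = "notifications"  # illustrative broadcast group

    async def connect(self):
        # Join the broadcast group, then accept the WebSocket handshake
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def notify(self, event):
        # Handles events sent to the group with {"type": "notify", "message": ...}
        await self.send(text_data=json.dumps({"message": event["message"]}))

A view or background task could then push a message with channel_layer.group_send("notifications", {"type": "notify", "message": "..."}), which Channels dispatches to the notify handler of every connected consumer.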
0 notes
justnshalom · 1 year ago
Text
JavaScript: Find Intersection
In JavaScript, finding the intersection of two arrays is a common programming task. The intersection is the set of elements present in both arrays. There are several ways to accomplish this, depending on the specific requirements and constraints of your project. Method 1: Using a loop. One of the simplest approaches to find the intersection is…
View On WordPress
0 notes
codewithnazam · 2 years ago
Text
Javascript Async & Await with Examples
#javascript #javascripts #javascriptlibraries #javascript30 #javascriptengineer #javascripting #javascript3 #javascriptdeveloper #javascriptiskillingmysoul #javascriptbasics #javascriptdevelopers #javascriptlover
Javascript is a high-level, dynamic, and interpreted programming language that is widely used for developing web applications and creating interactive effects on the web. Async and Await are two powerful features of Javascript that make it possible to write asynchronous code in a way that is much more readable and manageable. In this article, we will discuss what Async and Await are, how they…
View On WordPress
1 note · View note
nerdyqueerr · 5 months ago
Text
Outsourcing academic decisions to tumblr dot com
Further context for each course:
"Weird Fiction" is an English course that will have me reading and annotating tons of short stories. Reading reviews + syllabus online has led me to believe that the prof is strict as hell and not very generous. However if I do ok on the midterm I get to write a short story instead of a final exam. Online synchronous.
"Religon and the body" is a new religion course which is about. Well. Religion and the body, whether that's death rituals, food purity codes, sexuality, etc. Reviews of the prof seem to say she's flexible but the course has also never been taught before so who knows if the content and evaluation format will work for me. Online asynchronous.
6 notes · View notes
general-george-washington · 8 months ago
Text
only class I am able to take for the summer courses is completely asynchronous
That might as well be an independent study.
2 notes · View notes
aliveandfullofjoy · 2 years ago
Text
cause of death annotated bibliography
4 notes · View notes
sleepydreamybunny · 2 months ago
Text
i would just like to say that there's a special place in hell for instructors who make you respond to peers' posts in a discussion board but the prompt is 'pick one of these topics and write at least four sentences about something you like'.
PROFE. You can't ask for something more than "I liked your post" wheN THERE'S NOTHING TO REPLY TO???
Literally if you want discussion then there kinda has to be sides to take. Points to argue. Like don't get me wrong, most people don't find tax law scintillating but no one is going to tell another student they're wrong for finding it interesting when it's a very key part of this class???
0 notes
mosh2viss · 3 months ago
Text
https://www.futureelectronics.com/p/semiconductors--memory--RAM--static-ram--asynchronous/cy62157ev18ll-55bvxi-infineon-3728565
SRAM memory cell, Types of SRAM, data Memory, memory card reader
CY62157EV18 Series 8 Mb (512 K x 16) 1.65 - 2.25 V 55 ns Static RAM - VFBGA-48
1 note · View note
abr2tello · 5 months ago
Text
https://www.futureelectronics.com/p/semiconductors--memory--RAM--static-ram--asynchronous/cy62167ev30ll-45bvxi-infineon-6042923
Non-Volatile SRAM memory, Non-Volatile SRAM, Non volatile memory
CY62167EV30 Series 16 Mb (1M x 16/2M x 8) 2.2 - 3.6 V 45 ns Static RAM -TSOP-48
1 note · View note
bbob2reaa · 5 months ago
Text
https://www.futureelectronics.com/p/semiconductors--memory--RAM--static-ram--asynchronous/cy62167ev30ll-45bvxit-infineon-1068579
Non Volatile SRAM memory, What is SRAM, SRAM manufacturers, SRAM chip
CY62167EV30 Series 16 Mb (1M x 16 / 2 M x 8) 3 V 45 ns Static RAM - FBGA-48
1 note · View note
robomad · 3 months ago
Text
Mastering Advanced Error Handling Techniques in Node.js Applications
Introduction: Error handling is a critical aspect of building robust Node.js applications. While Node.js provides basic error handling mechanisms, complex applications require more sophisticated techniques to ensure errors are caught, logged, and handled gracefully. Effective error handling not only improves the reliability of your application but also enhances the user experience by providing…
0 notes
thetexvn · 11 months ago
Text
0 notes
arg0t · 1 year ago
Text
i hate asynchronous classes so much jesus christ i cannot do another discussion post
1 note · View note
xbsoftware · 2 years ago
Link
The Difference Between Asynchronous and Synchronous Programming
Multi-threading has opened up wide possibilities and, among other things, given rise to two essential development models. Asynchronous and synchronous programming are two terms that may sound confusing, but understanding their differences helps you spend effort more efficiently and build applications that provide an outstanding user experience. Today, we’ll learn the major pros and cons of both approaches.
0 notes